Assignment 04b: Sleeping habits¶

As a data scientist you will work with ordinal or categorical data as well. What are good methods to visualize such data? What are proper statistics to use? What needs to be done in the data preparation steps to conduct a certain analysis? This week we will work with data from a sleep study. This dataset is not mandatory: you are encouraged to use data from your own project when possible.

Keywords: statistics, categorical data, ordinal data, survey-based sleep study, exploratory data analysis, normalization, hypothesis testing, p-value

More to read:

  • https://fennaf.gitbook.io/bfvm22prog1/

  • https://machinelearningmastery.com/statistical-hypothesis-tests-in-python-cheat-sheet/

There are many cheat sheets and tutorials on the internet; the list below is a compact selection:

  • https://www.kaggle.com/hamelg/python-for-data-21-descriptive-statistics
  • https://www.kaggle.com/hamelg/python-for-data-22-probability-distributions
  • https://www.kaggle.com/hamelg/python-for-data-23-confidence-intervals
  • https://www.kaggle.com/hamelg/python-for-data-24-hypothesis-testing
  • https://www.kaggle.com/hamelg/python-for-data-25-chi-squared-tests
  • https://www.kaggle.com/hamelg/python-for-data-26-anova/notebook
  • https://www.kaggle.com/hamelg/python-for-data-27-linear-regression

You will learn to analyse data with pandas and numpy and to visualize it with bokeh. Concretely, you will preprocess the Sleep Study data into an appropriate format in order to conduct statistical and visual analysis.

Learning objectives

  • Conduct exploratory data analysis for categorical data
  • Provide statistical evidence of factor effect
  • Maintain development environment
  • Apply coding standards and FAIR principles

Please add topics you want to learn here: https://padlet.com/ffeenstra1/z9duo25d39dcgezz

About the data¶

The data is collected from a survey-based study of the sleeping habits of individuals within the US.

Below is a description of each of the variables contained within the dataset.

  • Enough = Do you think that you get enough sleep?
  • Hours = On average, how many hours of sleep do you get on a weeknight?
  • PhoneReach = Do you sleep with your phone within arm's reach?
  • PhoneTime = Do you use your phone within 30 minutes of falling asleep?
  • Tired = On a scale from 1 to 5, how tired are you throughout the day? (1 being not tired, 5 being very tired)
  • Breakfast = Do you typically eat breakfast?

The two research questions you should answer in this assignment are:

  1. Is there a difference in hours of sleep caused by having breakfast (yes, no)?
  2. Is there a difference in hours of sleep caused by having breakfast and the tiredness score?

The assignment consists of 6 parts:

  • part 1: load the data
  • part 2: data inspection
  • part 3: check assumptions
    • check normality 3.1
    • check equal variance 3.2
  • part 4: prepare the data
  • part 5: answer the research question
  • part 6: enhanced plotting

Parts 1 to 5 are mandatory; part 6 is optional (bonus). Mind that you cannot copy code without referencing it. If you copy code you need to be able to explain it verbally, and you will not get the full score.

NOTE: If your project data is suitable, you can use it instead of the given data.

ANOVA¶

Analysis of variance (ANOVA) compares the variance between groups versus within groups. It basically determines whether the differences between groups are larger than the differences within a group (the noise). A graph illustrating this is available at: https://link.springer.com/article/10.1007/s00424-019-02300-4/figures/2

In ANOVA, the dependent variable must be at a continuous (interval or ratio) level of measurement, for instance glucose level. The independent variables must be categorical (nominal or ordinal), for instance trial category, time of day (AM versus PM) or time of trial (different categories). Like the t-test, ANOVA is a parametric test and comes with assumptions: the data is normally distributed, the variance among the groups is approximately equal (homogeneity of variance), and the observations are independent of each other.

A one-way ANOVA has just one independent variable. A two-way ANOVA (also called factorial ANOVA) uses two independent variables. For research question 1 we can use a one-way ANOVA; for research question 2 a two-way ANOVA. But first we need to check the assumptions.

If your data is not normally distributed you might want to look for an alternative. See also https://www.statisticshowto.com/probability-and-statistics/statistics-definitions/parametric-and-non-parametric-data/
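As a minimal sketch of what a one-way ANOVA looks like in Python (using synthetic data and scipy's `f_oneway`; the column names `Hours` and `Breakfast` mirror the sleep dataset):

```python
import numpy as np
import pandas as pd
from scipy import stats

# synthetic stand-in with the same column names as the sleep data
rng = np.random.default_rng(42)
df = pd.DataFrame({
    'Hours': rng.normal(7, 1, 60),
    'Breakfast': rng.choice(['Yes', 'No'], 60),
})

# one-way ANOVA: does Hours differ between the Breakfast groups?
groups = [g.dropna() for _, g in df.groupby('Breakfast')['Hours']]
f_stat, p_value = stats.f_oneway(*groups)
print(f'F = {f_stat:.2f}, p = {p_value:.3f}')
```

For the two-way case (research question 2) scipy has no direct function; a common choice is statsmodels' formula interface, e.g. `ols('Hours ~ C(Breakfast) * C(Tired)', data=df)` followed by `anova_lm`, as shown in the linked tutorials.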


Part 1: Load the data (10 pt)¶

Load the sleep.csv data. Preferably, read the data not with a hard-coded data path but via a config file; see https://fennaf.gitbook.io/bfvm22prog1/data-processing/configuration-files/yaml. Get yourself familiar with the data and answer the following questions.

  1. What is the percentage of missing data?
  2. Considering the research questions, what is the dependent variable and what are the independent variables? Are they of the correct datatype?
In [1]:
import yaml
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
In [58]:
with open('config.yaml') as stream:
    config = yaml.safe_load(stream)
    
df = pd.read_csv(config['file_assignment_4b'])
df.head()
Out[58]:
Enough Hours PhoneReach PhoneTime Tired Breakfast
0 Yes 8.0 Yes Yes 3 Yes
1 No 6.0 Yes Yes 3 No
2 Yes 6.0 Yes Yes 2 Yes
3 No 7.0 Yes Yes 4 No
4 No 7.0 Yes Yes 2 Yes
In [3]:
# code printing percentage missing data

for column in df.columns:
    print(f'Column "{column}" has {df[column].isna().sum()} missing datapoints')

print('\nMissing datapoints below:')
print(df[df['Hours'].isna()])

# compute the count instead of hard-coding it
n_missing_rows = df.isna().any(axis=1).sum()
print(f'The percentage of missing data is: {(n_missing_rows/len(df))*100:.2f}%')
Column "Enough" has 0 missing datapoints
Column "Hours" has 2 missing datapoints
Column "PhoneReach" has 0 missing datapoints
Column "PhoneTime" has 0 missing datapoints
Column "Tired" has 0 missing datapoints
Column "Breakfast" has 0 missing datapoints

Missing datapoints below:
   Enough  Hours PhoneReach PhoneTime  Tired Breakfast
65     No    NaN        Yes        No      3       Yes
91    Yes    NaN         No       Yes      2       Yes
The percentage of missing data is: 1.92%
In [4]:
df.groupby('Tired').describe()
Out[4]:
Hours
count mean std min 25% 50% 75% max
Tired
1 4.0 8.000000 0.816497 7.0 7.75 8.0 8.25 9.0
2 26.0 6.653846 1.468123 4.0 6.00 7.0 7.00 9.0
3 39.0 6.717949 0.971941 5.0 6.00 7.0 7.00 9.0
4 23.0 6.739130 1.251086 5.0 6.00 7.0 7.00 10.0
5 10.0 5.700000 2.584140 2.0 4.25 5.5 7.75 9.0
In [5]:
#code printing answer dependent and independent variables
print("""The independent variables are: Breakfast and Tiredness
The dependent variable is Hours (of sleep).""")
The independent variables are: Breakfast and Tiredness
The dependent variable is Hours (of sleep).
In [6]:
#code printing answer about datatypes
print(df.info())

print('''
      In my opinion the datatypes of the columns make sense, however
      when you want to investigate correlations this does not work on 
      string objects. For this, dummy variables can be created''')
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 104 entries, 0 to 103
Data columns (total 6 columns):
 #   Column      Non-Null Count  Dtype  
---  ------      --------------  -----  
 0   Enough      104 non-null    object 
 1   Hours       102 non-null    float64
 2   PhoneReach  104 non-null    object 
 3   PhoneTime   104 non-null    object 
 4   Tired       104 non-null    int64  
 5   Breakfast   104 non-null    object 
dtypes: float64(1), int64(1), object(4)
memory usage: 5.0+ KB
None

      In my opinion the datatypes of the columns make sense, however
      when you want to investigate correlations this does not work on 
      string objects. For this, dummy variables can be created

Part 2: Inspect the data (30 pt)¶

Inspect the data practically. Get an idea of how well the variable categories are balanced. Are the values of a variable equally divided? What is the mean value of the dependent variable? Are there correlations among the variables?

  • Create some meaningful overviews such as variable value counts
  • Create a scatter plot showing the relation between being tired and hours of sleep, with different colors for Breakfast
  • Print some basic statistics about the target (mean, standard deviation)
  • Create a heatmap to check for correlations among variables.
  • Hints
    • the gitbook has a bokeh heatmap example
In [7]:
from bokeh.io import output_notebook
from bokeh.plotting import figure, show
from bokeh.layouts import gridplot
from bokeh.transform import jitter

output_notebook()
Loading BokehJS ...
In [27]:
# code your answer to the value counts and distribution plots here
vc_tired = df['Tired'].value_counts()
vc_hours = df['Hours'].value_counts()

p1 = figure(title='Tired, value counts', x_axis_label='Tired score (1 - 5)',
            y_axis_label = 'counts')
p1.vbar(x=vc_tired.index, top=vc_tired.values, width=0.5)

p2 = figure(title='Hours slept, value counts', x_axis_label='Sleep (hours)',
            y_axis_label = 'counts')
p2.vbar(x=vc_hours.index, top=vc_hours.values, width=0.5)

show(gridplot(children=[[p1,p2]], width = 400, height = 400))
In [9]:
p = figure(title = 'Hours slept against tiredness',
           y_axis_label = 'Hours slept (Hours)', 
           x_axis_label = 'Tired (Score, 1-5)')

# breakfast = Yes
p.circle(y=jitter('Hours',width=0.2), x=jitter('Tired', width=0.3), 
         source = df[df['Breakfast'] == 'Yes'],
         color = 'orange', legend_label = 'Breakfast', alpha=0.5, size=10)


# breakfast = No
p.circle(y=jitter('Hours',width=0.2), x=jitter('Tired', width=0.3), 
         source = df[df['Breakfast'] == 'No'],
         color = 'green', legend_label = 'No Breakfast', alpha=0.5, size=10)


show(p)
In [10]:
sns.kdeplot(x='Hours', hue='Tired', data=df, fill = True, palette = 'Set2',)
plt.xlim((1,10))
Out[10]:
(1.0, 10.0)
In [11]:
sns.kdeplot(x=df['Tired'], hue=df['Breakfast'], fill = True, palette='Set1')
plt.xlim(1,5)
plt.xticks([1,2,3,4,5])
;
Out[11]:
''

Below are the same plots as above made with hvplot, just as an exercise and to see the difference between plotting tools.

In [12]:
import hvplot.pandas
df[['Hours','Tired']].hvplot.kde(by='Tired',
                                 legend = 'top_left',
                                 xlim=(1,10),
                                 xticks=list(range(1,11)),
                                 title = 'Density of hours slept stratified by tired score')
Out[12]:
In [13]:
import hvplot.pandas
df[['Breakfast','Tired']].hvplot.kde(by='Breakfast',
                                     legend = 'top_left',
                                     xlim=(1,5),
                                     xticks=[1,2,3,4,5],
                                     title = 'Breakfast density given tired score')
Out[13]:
In [14]:
# statistics
In [15]:
df_dummies = pd.get_dummies(df, drop_first=True)
In [16]:
#code your answer for the heatmap here and briefly state your finding

import matplotlib.pyplot as plt
import seaborn as sns

sns.heatmap(df_dummies.corr(), cmap='viridis', annot=True)

print("""
      There seems to be a relatively strong negative correlation between
      'Enough_Yes' and 'Tired', surprisingly there doesn't seem to be
      a strong correlation between tired and hours.
      """)
      There seems to be a relatively strong negative correlation between
      'Enough_Yes' and 'Tired', surprisingly there doesn't seem to be
      a strong correlation between tired and hours.
      

Part 3: Check Assumptions¶

Before we answer the research question with ANOVA we need to check the following assumptions:

  1. ANOVA assumes that the dependent variable is normally distributed
  2. ANOVA also assumes homogeneity of variance
  3. ANOVA also assumes that the observations are independent of each other. Most of the time we need domain knowledge and experiment setup descriptions to estimate this assumption

We are going to do this graphically and statistically.

Check normality (10 pt)¶

  • Plot the distribution of the dependent variable. Add a vertical line at the position of the average. Add a vertical line for the robust estimate. Add the normal distribution line to the plot. Comment on the normality of the data. Do you want the full points? Plot with bokeh!
  • Use a Shapiro-Wilk Test or an Anderson-Darling test to check statistically
Hints
  • check the code of lesson 1 DS1 bayesian statistics
  • heart_failure case of gitbook uses bokeh histograms
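A minimal sketch of both statistical checks with scipy (using a synthetic stand-in for `df['Hours'].dropna()`; swap in the real column):

```python
import numpy as np
from scipy import stats

# synthetic stand-in for df['Hours'].dropna()
rng = np.random.default_rng(0)
hours = rng.normal(7, 1.2, 100)

# Shapiro-Wilk: H0 = the sample comes from a normal distribution
stat, p = stats.shapiro(hours)
print(f'Shapiro-Wilk: W={stat:.3f}, p={p:.3f}')
if p < 0.05:
    print('reject normality at alpha = 0.05')
else:
    print('no evidence against normality')

# Anderson-Darling: compare the statistic against the critical values
result = stats.anderson(hours, dist='norm')
print(f'Anderson-Darling statistic: {result.statistic:.3f}')
for cv, sl in zip(result.critical_values, result.significance_level):
    print(f'  {sl}%: critical value {cv:.3f}')
```

Note that both tests have a null hypothesis of normality, so a small p-value (Shapiro-Wilk) or a statistic above the critical value (Anderson-Darling) argues *against* a normal distribution.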
In [17]:
df['Hours'].value_counts()
Out[17]:
7.0     35
6.0     24
8.0     16
5.0     12
9.0      8
4.0      4
2.0      2
10.0     1
Name: Hours, dtype: int64
In [29]:
p = figure()

p.line(x=vc_hours.index, y=vc_hours.values)

show(p)
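The bullet above asks for more than a value-count line: a distribution with the mean, a robust location estimate, and a fitted normal curve. As a compact matplotlib sketch of those ingredients (synthetic data; the median is used here as the robust estimate, which is an assumption, and the assignment awards full points only for a bokeh version):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

# synthetic stand-in for df['Hours'].dropna()
rng = np.random.default_rng(0)
hours = rng.normal(7, 1.2, 100)

mean = hours.mean()
robust = np.median(hours)  # median as a robust location estimate

fig, ax = plt.subplots()
ax.hist(hours, bins=10, density=True, alpha=0.5)
ax.axvline(mean, color='red', label=f'mean = {mean:.2f}')
ax.axvline(robust, color='green', linestyle='--', label=f'median = {robust:.2f}')

# overlay the normal distribution fitted to the sample
x = np.linspace(hours.min(), hours.max(), 200)
ax.plot(x, stats.norm.pdf(x, loc=mean, scale=hours.std(ddof=1)), color='black')
ax.legend()
plt.show()
```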
In [19]:
# briefly summarize your findings

Check homogeneity of variance (20 pt)¶

  • Use boxplots to check the homogeneity of variance. Do you want the full points? Plot with bokeh!
  • Use a Levene’s & Bartlett’s Test of Equality (Homogeneity) of Variance to test equal variance statistically
In [20]:
 
In [21]:
# your code for the statistical test here
In [22]:
# briefly summarize your findings

Part 4: Prepare your data (10 pt)¶

Create a dataframe with equal sample sizes. Make three categories for tiredness: 1-2 = no, 3 = maybe, 4-5 = yes.
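One possible sketch of this step (assuming "equal sample size" means downsampling each tiredness category to the size of the smallest one; `groupby(...).sample` needs pandas ≥ 1.1, and the toy frame below stands in for the sleep data):

```python
import pandas as pd

# toy frame standing in for the sleep data
df = pd.DataFrame({
    'Hours': [8, 6, 6, 7, 7, 5, 9, 7, 6, 8],
    'Tired': [3, 3, 2, 4, 2, 5, 1, 4, 3, 2],
})

# recode Tired into three categories on an independent copy;
# .copy() prevents the map from also changing df itself
equal_df = df.copy()
equal_df['Tired'] = equal_df['Tired'].map(
    {1: 'no', 2: 'no', 3: 'maybe', 4: 'yes', 5: 'yes'})

# downsample every category to the size of the smallest one
n = equal_df['Tired'].value_counts().min()
equal_df = (equal_df.groupby('Tired', group_keys=False)
            .sample(n=n, random_state=1))
print(equal_df['Tired'].value_counts())
```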

In [59]:
equal_df = df
equal_df
Out[59]:
Enough Hours PhoneReach PhoneTime Tired Breakfast
0 Yes 8.0 Yes Yes 3 Yes
1 No 6.0 Yes Yes 3 No
2 Yes 6.0 Yes Yes 2 Yes
3 No 7.0 Yes Yes 4 No
4 No 7.0 Yes Yes 2 Yes
... ... ... ... ... ... ...
99 No 7.0 Yes Yes 2 Yes
100 No 7.0 No Yes 3 Yes
101 Yes 8.0 Yes Yes 3 Yes
102 Yes 7.0 Yes Yes 2 Yes
103 Yes 6.0 Yes Yes 3 Yes

104 rows × 6 columns

In [60]:
equal_df['Tired'] = equal_df['Tired'].map({1:'no', 2:'no',
                                           3:'maybe',
                                           4:'yes', 5:'yes'}) 
equal_df
Out[60]:
Enough Hours PhoneReach PhoneTime Tired Breakfast
0 Yes 8.0 Yes Yes maybe Yes
1 No 6.0 Yes Yes maybe No
2 Yes 6.0 Yes Yes no Yes
3 No 7.0 Yes Yes yes No
4 No 7.0 Yes Yes no Yes
... ... ... ... ... ... ...
99 No 7.0 Yes Yes no Yes
100 No 7.0 No Yes maybe Yes
101 Yes 8.0 Yes Yes maybe Yes
102 Yes 7.0 Yes Yes no Yes
103 Yes 6.0 Yes Yes maybe Yes

104 rows × 6 columns

In [61]:
df
Out[61]:
Enough Hours PhoneReach PhoneTime Tired Breakfast
0 Yes 8.0 Yes Yes maybe Yes
1 No 6.0 Yes Yes maybe No
2 Yes 6.0 Yes Yes no Yes
3 No 7.0 Yes Yes yes No
4 No 7.0 Yes Yes no Yes
... ... ... ... ... ... ...
99 No 7.0 Yes Yes no Yes
100 No 7.0 No Yes maybe Yes
101 Yes 8.0 Yes Yes maybe Yes
102 Yes 7.0 Yes Yes no Yes
103 Yes 6.0 Yes Yes maybe Yes

104 rows × 6 columns

The reason map also changed the Tired column of df is that equal_df = df does not copy the data: both names refer to the same DataFrame, so any in-place change shows up under both. Using equal_df = df.copy() would give an independent copy and leave df untouched.


Part 5: Answer the research questions (20 pt)¶

Hints
  • use one-way ANOVA for research question 1
  • Use two-way ANOVA for research question 2
  • https://reneshbedre.github.io/blog/anova.html
In [24]:
from scipy import stats
# scipy has no stats.anova(); f_oneway performs a one-way ANOVA
groups = [g.dropna() for _, g in df.groupby('Breakfast')['Hours']]
print(stats.f_oneway(*groups))

Part 6: Enhanced plotting (20 pt)¶

Create a panel with 1) your dataframe with equal sample sizes, 2) a picture of a sleeping beauty, 3) the scatter plot of tired / hours of sleep with different colors for Breakfast from part 2, and 4) the boxplots with the p-value of the ANOVA outcome in the title.

In [25]:
#your solution here